110 research outputs found

    Traitement de données bioinformatiques massives (Big Data)

    The volumes of bioinformatics data available on the Web are constantly increasing. Access to, and joint exploitation of, these highly distributed data (i.e., available in data sources distributed over the Web) and highly heterogeneous data (in text or tabulated files, possibly including images, in different formats, described with different levels of detail and different levels of quality...) is essential for biological knowledge to progress. The purpose of this short report is to present in a simple way the problems raised by the joint use of bioinformatics data

    Data Integration in the Life Sciences: Scientific Workflows, Provenance, and Ranking

    Biological research is a science which derives its findings from the proper analysis of experiments. Today, a large variety of experiments are carried out in hundreds of labs around the world, and their results are reported in a myriad of different databases, websites, publications, etc., using different formats, conventions, and schemas. Providing uniform access to these diverse and distributed databases is the aim of data integration solutions, which have been designed and implemented within the bioinformatics community for more than 20 years. However, the perception of the problem of data integration in the life sciences has changed: while early approaches concentrated on handling schema-dependent queries over heterogeneous and distributed databases, current research emphasizes instances rather than schemas, tries to place the human back into the loop, and intertwines data integration and data analysis. Transparency -- providing users with the illusion that they are using a centralized database, and thus completely hiding the original databases -- was one of the main goals of federated databases. It is no longer a target. Instead, users want to know exactly which data from which source was used in which way in a study (provenance). The old model of "first integrate, then analyze" is replaced by a new, process-oriented paradigm: "integration is analysis -- and analysis is integration". This paradigm change gives rise to some important research trends. First, the process of integration itself, i.e., the integration workflow, is becoming a research topic in its own right. Scientific workflows actually implement the paradigm "integration is analysis". A second trend is the growing importance of sensible ranking: as data sets keep growing, it becomes increasingly difficult for the biologist to distinguish relevant data within large and noisy data sets.
This HDR thesis outlines my contributions to the field of data integration in the life sciences. More precisely, my work falls within the first two contexts mentioned above, namely scientific workflows and biological data ranking. The reported results were obtained from 2005 to late 2014, first as a postdoctoral fellow at the University of Pennsylvania (Dec 2005 to Aug 2007) and then as an Associate Professor at Université Paris-Sud (LRI, UMR CNRS 8623, Bioinformatics team) and Inria (Saclay-Île-de-France, AMIB team, 2009-2014)

    Réécriture de workflows scientifiques et provenance

    Scientific workflow systems are numerous, and they provide provenance-management modules that collect information about executions (data consumed and produced), making it possible to ensure the reproducibility of an experiment. A large number of approaches have been developed to help manage these masses of provenance data. Several of these approaches have good complexity because they are dedicated to series-parallel workflow structures. Rewriting a workflow into a series-parallel workflow would therefore make it possible to better exploit the full range of existing provenance tools. Our contributions are: (i) the introduction of the notion of provenance-equivalent workflow rewriting, (ii) a review of graph transformations, (iii) the design of SPFlow, a provenance-preserving rewriting algorithm, and (iv) an evaluation of our approach on a thousand workflows
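The series-parallel property this abstract relies on can be tested mechanically: a two-terminal workflow DAG is series-parallel exactly when it collapses to a single edge under repeated series and parallel reductions. Below is a minimal sketch of that classic test; it is not the SPFlow algorithm itself, which produces a provenance-equivalent rewriting rather than a yes/no answer.

```python
from collections import Counter

def is_series_parallel(edge_list, source, sink):
    """Check whether a two-terminal DAG is series-parallel by
    exhaustively applying series and parallel reductions.
    `edge_list` is a list of (u, v) pairs (a multigraph)."""
    edges = Counter(edge_list)
    changed = True
    while changed:
        changed = False
        # Parallel reduction: collapse duplicate edges between the same pair.
        for e, k in list(edges.items()):
            if k > 1:
                edges[e] = 1
                changed = True
        # Series reduction: bypass an internal vertex with exactly
        # one incoming and one outgoing edge.
        indeg, outdeg = Counter(), Counter()
        for (u, v), k in edges.items():
            outdeg[u] += k
            indeg[v] += k
        for w in list(indeg):
            if w in (source, sink):
                continue
            if indeg[w] == 1 and outdeg[w] == 1:
                (u, _) = next(e for e in edges if e[1] == w)
                (_, v) = next(e for e in edges if e[0] == w)
                del edges[(u, w)], edges[(w, v)]
                edges[(u, v)] += 1
                changed = True
                break  # degrees changed; recompute them
    return set(edges) == {(source, sink)}
```

For instance, a diamond (two parallel two-step branches) reduces to a single edge, while adding a "cross" edge between the branches yields the smallest non-series-parallel pattern, which is precisely the kind of structure a rewriting such as SPFlow has to eliminate.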

    OpenAlea: Scientific Workflows Combining Data Analysis and Simulation

    Analyzing biological data (e.g., annotating genomes, assembling NGS data...) may involve very complex and interlinked steps in which several tools are combined. Scientific workflow systems have reached a level of maturity that makes them able to support the design and execution of such in-silico experiments, making them increasingly popular in the bioinformatics community. However, in some emerging application domains such as systems biology, developmental biology, or ecology, the need for data analysis is combined with the need to model complex multi-scale biological systems, possibly involving multiple simulation steps. This requires the scientific workflow to deal with retro-action to understand and predict the relationships between the structure and function of these complex systems. OpenAlea (openalea.gforge.inria.fr) is the only scientific workflow system able to uniformly address this problem, which has made it successful in the scientific community. One of its main original features is the introduction of higher-order dataflows as a means to uniformly combine classical data analysis with modeling and simulation. In this demonstration paper, we provide for the first time a description of the OpenAlea system, involving an original combination of features. We illustrate the demonstration on a high-throughput workflow in phenotyping, phenomics, and environmental control designed to study the interplay between plant architecture and climatic change
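Higher-order dataflows can be pictured with a toy example: a node that receives an entire sub-dataflow as one of its inputs and maps it over a collection. The sketch below is purely illustrative and assumes nothing about OpenAlea's actual API; `compose`, `map_node`, and the toy stages are hypothetical names.

```python
from typing import Callable, Iterable, List

def compose(*stages: Callable) -> Callable:
    """Chain simple processing steps into one dataflow function."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

def map_node(subflow: Callable, items: Iterable) -> List:
    """A higher-order node: it takes a whole sub-dataflow as an
    input value and applies it to every element of a collection."""
    return [subflow(item) for item in items]

# A toy sub-dataflow: normalize a measurement, then threshold it.
normalize = lambda v: v / 10.0
threshold = lambda v: v > 0.5
subflow = compose(normalize, threshold)

results = map_node(subflow, [3, 7, 9])  # -> [False, True, True]
```

The point of the higher-order style is that `subflow` is itself a value flowing through the graph, so the same mapping node can apply an analysis pipeline or a simulation step without special-casing either.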

    Privacy Issues in Scientific Workflow Provenance

    A scientific workflow often deals with proprietary modules as well as private or confidential data, such as health or medical information. Hence, providing exact answers to provenance queries over all executions of the workflow may reveal private information. In this paper, we first study the potential privacy issues in a scientific workflow -- module privacy, data privacy, and provenance privacy -- and frame several natural questions: (i) can we formally analyze module, data, or provenance privacy, giving provable privacy guarantees for an unlimited/bounded number of provenance queries? (ii) how can we answer provenance queries, providing as much information as possible to the user while still guaranteeing the required privacy? We then look at module privacy in detail and propose a formal model from our recent work in [11]. Finally, we point to several directions for future work

    Layer Decomposition: An Effective Structure-based Approach for Scientific Workflow Similarity

    Scientific workflows have become a valuable tool for large-scale data processing and analysis. This has led to the creation of specialized online repositories to facilitate workflow sharing and reuse. Over time, these repositories have grown to sizes that call for advanced methods to support workflow discovery, in particular for effective similarity search. Here, we present a novel and intuitive workflow similarity measure that is based on layer decomposition. Layer decomposition accounts for the directed dataflow underlying scientific workflows, a property which has not been adequately considered in previous methods. We comparatively evaluate our algorithm using a gold standard for 24 query workflows from a repository of almost 1500 scientific workflows, and show that it a) delivers the best results for similarity search, b) has a much lower runtime than other, often highly complex competitors in structure-aware workflow comparison, and c) can be stacked easily with even faster, structure-agnostic approaches to further reduce runtime while retaining result quality
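The layering intuition can be sketched as follows: place each module in a layer given by its longest distance from a workflow input, so the workflow is summarized as an ordered sequence of module-label multisets that can then be compared. This is a simplified illustration of the idea, not the authors' algorithm.

```python
from collections import defaultdict

def layer_decomposition(edges, labels):
    """Assign each module of a workflow DAG to a layer equal to its
    longest path length from any input node, and return the ordered
    list of sorted module-label multisets, one per layer.
    `edges` is a list of (u, v) pairs; `labels` maps node -> label."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    depth = {}
    def d(n):  # longest distance from an input (memoized)
        if n not in depth:
            depth[n] = 1 + max((d(p) for p in preds[n]), default=0)
        return depth[n]
    layers = defaultdict(list)
    for n in labels:
        layers[d(n)].append(labels[n])
    return [sorted(layers[k]) for k in sorted(layers)]
```

Because the result is an ordered sequence of multisets rather than a graph, two workflows can then be compared with cheap sequence-alignment or set-overlap techniques instead of expensive graph matching.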

    Effective and Efficient Similarity Search in Scientific Workflow Repositories

    Scientific workflows have become a valuable tool for large-scale data processing and analysis. This has led to the creation of specialized online repositories to facilitate workflow sharing and reuse. Over time, these repositories have grown to sizes that call for advanced methods to support workflow discovery, in particular for similarity search. Effective similarity search requires both high-quality algorithms for the comparison of scientific workflows and efficient strategies for indexing, searching, and ranking of search results. Yet, the graph structure of scientific workflows poses severe challenges to each of these steps. Here, we present a complete system for effective and efficient similarity search in scientific workflow repositories, based on the Layer Decomposition approach to scientific workflow comparison. Layer Decomposition specifically accounts for the directed dataflow underlying scientific workflows and, compared to other state-of-the-art methods, delivers the best results for similarity search at comparably low runtimes. Stacking Layer Decomposition with even faster, structure-agnostic approaches allows us to use proven, off-the-shelf tools for workflow indexing to further reduce runtimes and scale similarity search to the sizes of current repositories

    Distilling Structure in Scientific Workflows

    In this work, we have conducted a series of experiments to better understand the structure of scientific workflows. In particular, we have investigated techniques to understand why scientific workflows may or may not have a series-parallel structure

    Une autocomplétion générique de SPARQL dans un contexte multi-services

    SPARQL has established itself as the most widely used query language for accessing the masses of RDF data available on the Web. Nevertheless, writing a SPARQL query can prove tedious, even for experienced users. This is often due to the user's imperfect command of the ontologies used to describe the knowledge. To address this problem, a growing number of SPARQL query editors offer autocompletion features, which remain limited because they are often tied to a single input field and always tied to a fixed SPARQL service. In this article, we demonstrate, through an experiment, an approach for suggesting completions of a query as it is being written, exploiting many types of autocompletion, and doing so in a multi-service context. The experiment is based on a SPARQL editor to which we have added autocompletion mechanisms that support a constantly evolving ontology, here the collaborative knowledge base of Wikidata
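One completion type such an editor can offer is suggesting properties whose labels match the text the user has typed, by issuing an auxiliary query to the target service. The helper below merely builds such a query string (here against Wikidata's wikibase vocabulary); it is an illustrative sketch with a hypothetical function name, not the editor's actual mechanism.

```python
def completion_query(fragment: str, lang: str = "fr", limit: int = 10) -> str:
    """Build a SPARQL query asking an endpoint (e.g. Wikidata's) for
    property labels that start with the text typed so far."""
    return f"""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?property ?label WHERE {{
  ?property a <http://wikiba.se/ontology#Property> ;
            rdfs:label ?label .
  FILTER(LANG(?label) = "{lang}")
  FILTER(STRSTARTS(LCASE(?label), LCASE("{fragment}")))
}}
LIMIT {limit}
""".strip()
```

In a multi-service setting, the same template would be instantiated per endpoint, with the class and label predicates adapted to each service's vocabulary.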

    Path-based systems to guide scientists in the maze of biological data sources

    Fueled by novel technologies capable of producing massive amounts of data for a single experiment, scientists are faced with an explosion of information which must be rapidly analyzed and combined with other data to form hypotheses and create knowledge. Today, numerous biological questions can be answered without entering a wet lab. Scientific protocols designed to answer these questions can be run entirely on a computer. Biological resources are often complementary, focused on different objects and reflecting various experts' points of view. Exploiting the richness and diversity of these resources is crucial for scientists. However, with the increase of resources, scientists have to face the problem of selecting sources and tools when interpreting their data. In this paper, we analyze the way in which biologists express and implement scientific protocols, and we identify the requirements for a system which can guide scientists in constructing protocols to answer new biological questions. We present two such systems, BioNavigation and BioGuide, dedicated to helping scientists select resources by following suitable paths within the growing network of interconnected biological resources
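The guidance problem can be pictured as path search in a graph whose nodes are biological entity types or sources and whose edges are cross-references between them. The sketch below uses hypothetical source names and plain bounded breadth-first enumeration; it is not the BioGuide algorithm, which also takes user preferences over paths into account.

```python
from collections import deque

def find_paths(links, start, goal, max_len=4):
    """Enumerate simple paths between two entity types in a graph of
    cross-referenced sources. `links` maps each node to the list of
    nodes it cross-references; paths longer than max_len are pruned."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            paths.append(path)
            continue
        if len(path) >= max_len:
            continue
        for nxt in links.get(node, []):
            if nxt not in path:  # keep paths simple (no revisits)
                queue.append(path + [nxt])
    return paths

# Toy network: genes link to proteins and pathways, proteins to pathways.
toy_links = {"gene": ["protein", "pathway"], "protein": ["pathway"], "pathway": []}
routes = find_paths(toy_links, "gene", "pathway")
# -> [['gene', 'pathway'], ['gene', 'protein', 'pathway']]
```

Each returned path corresponds to one way of joining sources to answer the same question, which is exactly the choice such systems help a scientist make.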